

NetApp Teams with NVIDIA to Accelerate AI and HPC Infrastructure

#artificialintelligence

NetApp, a global, cloud-led, data-centric software company, announced that NetApp EF600 all-flash NVMe storage combined with the BeeGFS parallel file system is now certified for NVIDIA DGX SuperPOD. The new certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure to enable faster implementation of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with a range of solutions, from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of NetApp EF600 and BeeGFS file system for DGX SuperPOD is the latest addition to a complete set of AI solutions that have been developed by the companies. NetApp's portfolio of NVIDIA-accelerated solutions includes ONTAP AI to eliminate guesswork for faster adoption by using a field-proven reference architecture as well as a preconfigured, integrated solution that is easy to procure and deploy in a turnkey manner.


NetApp Teams with NVIDIA to Accelerate HPC and AI with Turnkey Supercomputing Infrastructure

#artificialintelligence

NetApp, a global, cloud-led, data-centric software company, announced that NetApp EF600 all-flash NVMe storage combined with the BeeGFS parallel file system is now certified for NVIDIA DGX SuperPOD. The new certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure to enable faster implementation of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with a range of solutions, from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of the NetApp EF600 and BeeGFS file system for DGX SuperPOD is the latest addition to a complete set of AI solutions that have been developed by the companies. "The NetApp and NVIDIA alliance has delivered industry-leading innovation for years, and this new qualification for NVIDIA DGX SuperPOD builds on that momentum," said Phil Brotherton, Vice President of Solutions and Alliances at NetApp.


Nvidia takes the wraps off Hopper, its latest GPU architecture

#artificialintelligence

After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture, a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that's designed to speed up specific categories of AI models. Another architectural highlight is Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs.


AI of the Storm: How We Built the Most Powerful Industrial Computer in the U.S. in Three Weeks During a Pandemic

#artificialintelligence

In under a month amid the global pandemic, a small team assembled the world's seventh-fastest computer. Today that mega-system, called Selene, communicates with its operators on Slack, has its own robot attendant and is driving AI forward in automotive, healthcare and natural-language processing. While many supercomputers tap exotic, proprietary designs that take months to commission, Selene is based on an open architecture NVIDIA shares with its customers. The Argonne National Laboratory, outside Chicago, is using a system based on Selene's DGX SuperPOD design to research ways to stop the coronavirus. The University of Florida will use the design to build the fastest AI computer in academia.


How NVIDIA Built A Supercomputer In Just 3 Weeks During Pandemic

#artificialintelligence

"Today that mega-system, called Selene, has its own robot attendant and is driving AI forward in automotive, healthcare and natural-language processing." Supercomputers typically take years to assemble, requiring many service personnel working round the clock for months to deliver a commissioned system. But, beating all odds, NVIDIA claims to have built its supercomputer within three weeks. Not only did NVIDIA assemble a mammoth of a computer in a short time, but it also broke records in the recently conducted MLPerf benchmark tests.


NVIDIA and Partners Bring AI Supercomputing to Enterprises | NVIDIA Blog

#artificialintelligence

Academia, hyperscalers and scientific researchers have been big beneficiaries of high performance computing and AI infrastructure. Yet businesses have largely been on the outside looking in. NVIDIA DGX SuperPOD provides businesses a proven design formula for building and running enterprise-grade AI infrastructure with extreme scale. The reference architecture gives businesses a prescription to follow to avoid exhaustive, protracted design and deployment cycles and capital budget overruns. It's available as a consumable solution that now integrates with the leading names in data center IT -- including DDN, IBM, Mellanox and NetApp -- and is fulfilled through a network of qualified resellers.


NVIDIA Builds Supercomputer to Build Self-Driving Cars | NVIDIA Blog

#artificialintelligence

In a clear demonstration of why AI leadership demands the best compute capabilities, NVIDIA today unveiled the world's 22nd fastest supercomputer -- DGX SuperPOD -- which provides AI infrastructure that meets the massive demands of the company's autonomous-vehicle deployment program. The system was built in just three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles. Customers can buy this system in whole or in part from any DGX-2 partner based on our DGX SuperPOD design. AI training of self-driving cars is the ultimate compute-intensive challenge.